A Bayesian perspective on classical control
The connections between optimal control and Bayesian inference have long been
recognised, with the field of stochastic (optimal) control combining these
frameworks for the solution of partially observable control problems. In
particular, for the linear case with quadratic functions and Gaussian noise,
stochastic control has shown remarkable results in different fields, including
robotics, reinforcement learning and neuroscience, especially thanks to the
established duality of estimation and control processes. Following this idea, we
recently introduced a formulation of PID control, one of the most popular
methods from classical control, based on active inference, a theory with roots
in variational Bayesian methods, and applications in the biological and neural
sciences. In this work, we highlight the advantages of our previous formulation
and introduce new and more general ways to tackle some existing problems in
current controller design procedures. In particular, we consider 1) a
gradient-based tuning rule for the parameters (or gains) of a PID controller,
2) an implementation of multiple degrees of freedom for independent responses
to different types of signals (e.g., two-degree-of-freedom PID), and 3) a novel
time-domain formalisation of the performance-robustness trade-off in terms of
tunable constraints (i.e., priors in a Bayesian model) of a single cost
functional, variational free energy.Comment: 8 pages, Accepted at IJCNN 202
The dark room problem in predictive processing and active inference, a legacy of cognitivism?
The free energy principle describes cognitive functions such as perception, action, learning and attention in terms of surprisal minimisation. Under simplifying assumptions, agents are depicted as systems minimising a weighted sum of prediction errors encoding the mismatch between incoming sensations and an agent’s predictions about such sensations. The “dark room” is defined as a state that an agent would occupy should it only look to minimise this sum of prediction errors.
This (paradoxical) state emerges from the contrast between attempts to describe the richness of human and animal behaviour in terms of surprisal minimisation and the trivial solution of a dark room, where the complete lack of sensory stimuli would provide the easiest way to minimise prediction errors, i.e., to be in a perfectly predictable state of darkness with no incoming stimuli. Using a process theory derived from the free energy principle, active inference, we investigate with an agent-based model the meaning of the dark room problem and discuss some of its implications for natural and artificial systems. In this setup, we propose that the presence of this paradox is primarily due to the long-standing belief that agents should encode accurate world models, typical of traditional (computational) theories of cognition.
An active inference implementation of phototaxis
Active inference is emerging as a possible unifying theory of perception and action in cognitive and computational neuroscience. On this theory, perception is a process of inferring the causes of sensory data by minimising the error between actual sensations and those predicted by an inner generative (probabilistic) model. Action, on the other hand, is drawn as a process that modifies the world such that the consequent sensory input meets expectations encoded in the same internal model. These two processes, inferring properties of the world and inferring actions needed to meet expectations, close the sensory/motor loop and suggest a deep symmetry between action and perception. In this work we present a simple agent-based model inspired by this new theory that offers insights on some of its central ideas. Previous implementations of active inference have typically examined a “perception-oriented” view of this theory, assuming that agents are endowed with a detailed generative model of their surrounding environment. In contrast, we present an “action-oriented” solution showing how adaptive behaviour can emerge even when agents operate with a simple model which bears little resemblance to their environment. We examine how various parameters of this formulation allow phototaxis and present an example of a different, “pathological” behaviour.
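The "action-oriented" idea, adaptive behaviour from a model that barely resembles the environment, can be sketched as follows. This is a hedged illustration rather than the paper's model: the light field, learning rate and the single prior belief are all assumptions made for the example. The agent never represents the light source; it only acts to make its sensation match its prior.

```python
import math

def light(x):
    """Light intensity at position x: one source at the origin
    (an assumed environment the agent knows nothing about)."""
    return math.exp(-x * x)

def phototaxis(x0=2.0, prior=1.0, lr=0.5, steps=300):
    """Minimal 'action-oriented' active-inference agent in 1-D.
    Its entire generative model is the prior belief that it senses
    full brightness. Action descends the gradient of the squared
    prediction error (a crude proxy for free energy)."""
    x = x0
    for _ in range(steps):
        s = light(x)
        err = s - prior                 # sensory prediction error
        ds_dx = -2 * x * light(x)       # how sensation changes with position
        x -= lr * err * ds_dx           # act on the world to reduce the error
    return x

final = phototaxis()
print(final)  # ends near the light source at x = 0
```

Phototaxis here is a side effect of error minimisation: the agent "believes" it is in bright light, and moving is the only way to make that belief true.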
Nonmodular architectures of cognitive systems based on active inference
In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches to cognitive science have challenged the idealisation of the brain as an input/output device, we argue that even the more recent attempts to model systems using closed-loop architectures still heavily rely on a strong separation between motor and perceptual functions. Previously, we have suggested that the mainstream notion of modularity strongly resonates with the separation principle of control theory. In this work we present a minimal model of a sensorimotor loop implementing an architecture based on the separation principle. We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent. These forces can be seen as variables that an agent cannot directly control, i.e., a perturbation from the environment or an interference caused by other agents. As an alternative approach inspired by embodied cognitive science, we then propose a nonmodular architecture based on active inference. We demonstrate the robustness of this architecture to unknown external inputs and show that the mechanism with which this is achieved in linear models is equivalent to integral control.
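The closing claim, that in linear models the robustness mechanism is equivalent to integral control, can be illustrated with a standard control-theoretic sketch (the plant, gains and disturbance below are assumptions for illustration, not taken from the paper): a memoryless proportional law leaves a steady-state bias under an unmodelled constant disturbance, while accumulating the error, the integral term, removes it.

```python
def run(controller, steps=2000, dt=0.01, disturbance=0.5, setpoint=1.0):
    """Simulate x' = -x + u + d under a given feedback law and
    return the final tracking error."""
    x, state = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        u, state = controller(err, state, dt)
        x += dt * (-x + u + disturbance)
    return setpoint - x

def proportional(err, state, dt, kp=4.0):
    """Memoryless law: no way to absorb an unmodelled input."""
    return kp * err, state

def proportional_integral(err, state, dt, kp=4.0, ki=2.0):
    """Accumulated error acts like an inferred estimate of the
    disturbance, cancelling it at steady state."""
    state += err * dt
    return kp * err + ki * state, state

print(abs(run(proportional)))           # ≈ 0.1: residual bias remains
print(abs(run(proportional_integral)))  # ≈ 0.0: bias integrated away
```

In the paper's terms, the disturbance plays the role of an external force the agent does not model; the integral state is the piece of the loop that makes the nonmodular architecture robust to it.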
The modularity of action and perception revisited using control theory and active inference
The assumption that action and perception can be investigated independently is entrenched in theories, models and experimental approaches across the brain and mind sciences. In cognitive science, this has been a central point of contention between computationalist and 4E (enactive, embodied, extended and embedded) theories of cognition, with the former embracing the modular, “classical sandwich” architecture of the mind and the latter actively denying this separation can be made. In this work we suggest that the modular independence of action and perception strongly resonates with the separation principle of control theory and furthermore that this principle provides formal criteria within which to evaluate the implications of the modularity of action and perception. We will also see that real-time feedback with the environment, often considered necessary for the definition of 4E ideas, is not however a sufficient condition to avoid the “classical sandwich”. Finally, we argue that an emerging framework in the cognitive and brain sciences, active inference, extends ideas derived from control theory to the study of biological systems while disposing of the separation principle, describing non-modular models of behaviour strongly aligned with 4E theories of cognition.
Active inference: building a new bridge between control theory and embodied cognitive science
The application of Bayesian techniques to the study and computational modelling of biological systems is one of the most remarkable advances in the natural and cognitive sciences over the last 50 years. More recently, it has been proposed that Bayesian frameworks are not only useful for building descriptive models of biological functions, but that living systems themselves can be seen as Bayesian (inference) machines. On this view, the statistical tools more traditionally used to account for data in biology, neuroscience and psychology are now used to model the mechanisms underlying functions and properties of living systems, as if the systems themselves were the ones “calculating” those probabilities following Bayesian inference schemes. The free energy principle (FEP) is a framework proposed in light of this paradigm shift, advocating the minimisation of variational free energy, a proxy for sensory surprisal, as a general computational principle for biological systems. More intuitively, and under some simplifying assumptions, the minimisation of variational free energy reduces, for an agent, to the minimisation of prediction errors on sensory input. Initially proposed as a candidate unifying theory of brain functioning, the FEP was later extended to encompass hypotheses on the origins of life, and is nowadays discussed in the cognitive science community for its possible implications for theories of the mind. In particular, one of the most popular process theories derived from the FEP, active inference, describes a biologically plausible algorithmic implementation of this principle with several repercussions on our understanding of cognition. In this thesis, I will focus on the role of this process theory for action and perception. In active inference, the two of them are combined in a closed sensorimotor loop as co-dependent processes of minimisation of a single loss function, variational free energy, with respect to different sets of variables.
Building on this, I will suggest that some of the core ideas of active inference are best seen in terms of enactive, embodied, extended and embedded (4E) theories, in contrast to the majority of the literature emphasising its apparent connections to more traditional, computational accounts of the mind. In particular, I will develop this argument by focusing on some proposals central to 4E approaches: (a) the non-brain-centric nature of cognitive processes, (b) the lack of explicit representations of the world, (c) the coupling of agent-environment systems and (d) the necessity of real-time feedback signals from the environment. Under the FEP formulation, I will present a series of case studies with mainly two objectives in mind: 1) to conceptually analyse and reframe these 4E ideas in the context of active inference, arguing for the advantages of their formalisation in a more general probabilistic (Bayesian) framework and, 2) to present new mathematical models and agent-based implementations of some of the conceptual connections between Bayesian inference frameworks and 4E proposals, largely missing in the literature.
[Commentary] Generative models as parsimonious descriptions of sensorimotor loops
The Bayesian brain hypothesis, predictive processing and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world. However, generative models need not be veridical representations of the environment. We suggest that they can (and should) be used to describe sensorimotor relationships relevant for behaviour rather than precise accounts of the world.
Embodied Skillful Performance: Where the Action Is
When someone masters a skill, their performance looks to us like second nature: it looks as if their actions are performed smoothly without explicit, knowledge-driven, online monitoring of their performance. Contemporary computational models in motor control theory, however, are instructionist. That is, they cast skillful performance as a knowledge-driven process, one that is driven by explicit motor representations of the action to be performed skillfully, which harness instructions for performance. Optimal control theory, a popular representative of such approaches, casts skillful performance as the execution of motor commands, the deliverances of a motor control system implemented by separable forward and inverse models that work in tandem with a state estimator to control the motor plant. These models rest on the principle that motor control is realized by the concerted action of separate modular subsystems, which transform an explicit motor representation into a sequence of physical movements. This paper aims to show the limitations of such instructionist approaches to skillful performance. Specifically, we address whether the assumption of modular knowledge-driven motor control in optimal control theory (based on motor commands computed by separable state estimators, forward models, and inverse models) is warranted. The first section of this paper examines the instructionist assumption, according to which skillful performance consists in the execution of instructions invested in motor representations. The second and third sections characterize the implementation of motor representations as motor commands, with a special focus on formulations from optimal control theory. 
The final sections of this paper examine predictive coding and active inference – behavioral modeling frameworks that descend from, but are distinct from, optimal control theory – and argue that the instructionist assumption is ill-motivated in light of new developments in motor control theory, which cast motor control and motor planning as a form of (active) inference.